
    Machine Learning and Pattern Recognition Methods for Remote Sensing Image Registration and Fusion

    In the last decade, the remote sensing world has evolved dramatically. New types of sensors, each collecting data with possibly different modalities, have been designed, developed, and deployed. Moreover, new missions have been planned and launched, aimed not only at collecting data on the Earth's surface but also at acquiring planetary data in support of the study of the whole Solar System. Such a variety of technologies highlights the need for automatic methods able to effectively exploit all the available information. In recent years, a lot of effort has been put into the design and development of advanced data fusion methods able to extract and make use of all the information available from as many complementary sources as possible. Indeed, the goal of this thesis is to present novel machine learning and pattern recognition methodologies designed to support the exploitation of diverse sources of information, such as multisensor, multimodal, or multiresolution imagery. In this context, image registration plays a major role, as it allows bringing two or more digital images into precise alignment for analysis and comparison. Here, image registration is tackled using both feature-based and area-based strategies. In the former case, the features of interest are extracted using a stochastic geometry model based on marked point processes, while, in the latter case, information-theoretic functionals and the domain adaptation capabilities of generative adversarial networks are exploited. In addition, multisensor image registration is also applied in a large-scale scenario by introducing a tiling-based strategy aimed at minimizing the computational burden, which is usually heavy in the multisensor case due to the need for information-theoretic similarity measures.
Moreover, automatic change detection with multiresolution and multimodality imagery is addressed via a novel Markovian framework based on a linear mixture model and on an ad hoc multimodal energy function minimized using graph cuts or belief propagation methods. The statistics of the data at the various spatial scales are modelled through appropriate generalized Gaussian distributions and by iteratively estimating a set of virtual images, at the finest resolution, representing the data that would have been collected if all the sensors had worked at that resolution. All these methodologies have been experimentally evaluated on different datasets, with particular focus on the trade-off between the achievable performance and the demands in terms of computational resources. Moreover, these methods are compared with state-of-the-art solutions and analyzed in terms of future developments, giving insights into possible future lines of research in this field.
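The abstract above mentions information-theoretic similarity measures as the computationally heavy core of area-based multisensor registration. It does not name the functional here; mutual information, estimated from the joint intensity histogram of the two images, is a common choice, so the following is only an illustrative sketch of such a measure (the function name and bin count are assumptions):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate the mutual information between two co-registered images
    from their joint intensity histogram, a typical information-theoretic
    similarity measure for multisensor image registration."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of img_b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Evaluating such a functional for every candidate transformation is what makes multisensor registration expensive, which motivates the tiling-based strategy described above.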

    A goal-driven unsupervised image segmentation method combining graph-based processing and Markov random fields

    Image segmentation is the process of partitioning a digital image into a set of homogeneous regions (according to some homogeneity criterion) to facilitate a subsequent higher-level analysis. In this context, the present paper proposes an unsupervised and graph-based method of image segmentation, which is driven by an application goal, namely, the generation of image segments associated with a user-defined and application-specific goal. A graph, together with a random grid of source elements, is defined on top of the input image. From each source satisfying a goal-driven predicate, called a seed, a propagation algorithm assigns a cost to each pixel on the basis of similarity and topological connectivity, measuring the degree of association with the reference seed. Then, the set of most significant regions is automatically extracted and used to estimate a statistical model for each region. Finally, the segmentation problem is expressed in a Bayesian framework in terms of probabilistic Markov random field (MRF) graphical modeling. An ad hoc energy function is defined based on parametric models, a seed-specific spatial feature, a background-specific potential, and local-contextual information. This energy function is minimized through graph cuts and, more specifically, the alpha-beta swap algorithm, yielding the final goal-driven segmentation based on the maximum a posteriori (MAP) decision rule. The proposed method does not require deep a priori knowledge (e.g., labelled datasets), as it only requires the choice of a goal-driven predicate and a suitable parametric model for the data. In the experimental validation with both magnetic resonance (MR) and synthetic aperture radar (SAR) images, the method demonstrates robustness, versatility, and applicability to different domains, thus allowing for further analyses guided by the generated product.
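The seed-propagation step described above assigns each pixel a cost combining similarity and topological connectivity. The paper's exact cost definition is not given in the abstract; one minimal sketch, assuming the cost is the smallest accumulated intensity dissimilarity along any 4-connected path from the seed, is a Dijkstra-style propagation:

```python
import heapq

def propagate_cost(image, seed):
    """Assign each pixel a cost measuring its association with a seed:
    the minimum, over 4-connected paths from the seed, of the accumulated
    intensity dissimilarity along the path (Dijkstra-style propagation).
    `image` is a 2-D list of intensities; `seed` is a (row, col) pair."""
    rows, cols = len(image), len(image[0])
    cost = [[float("inf")] * cols for _ in range(rows)]
    sr, sc = seed
    cost[sr][sc] = 0.0
    heap = [(0.0, sr, sc)]
    while heap:
        c, r, q = heapq.heappop(heap)
        if c > cost[r][q]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, q + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                new_cost = c + abs(image[nr][nc] - image[r][q])
                if new_cost < cost[nr][nc]:
                    cost[nr][nc] = new_cost
                    heapq.heappush(heap, (new_cost, nr, nc))
    return cost
```

Pixels inside a homogeneous region containing the seed receive near-zero cost, while crossing a strong intensity edge makes the cost jump, which is what lets the most significant regions be extracted by thresholding.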

    A markovian approach to unsupervised change detection with multiresolution and multimodality SAR data

    In the framework of synthetic aperture radar (SAR) systems, current satellite missions make it possible to acquire images at very high and multiple spatial resolutions with short revisit times. This scenario conveys a remarkable potential in applications to, for instance, environmental monitoring and natural disaster recovery. In this context, data fusion and change detection methodologies play major roles. This paper proposes an unsupervised change detection algorithm for the challenging case of multimodal SAR data collected by sensors operating at multiple spatial resolutions. The method is based on Markovian probabilistic graphical models, graph cuts, linear mixtures, generalized Gaussian distributions, Gram-Charlier approximations, maximum likelihood, and minimum mean squared error estimation. It benefits from the SAR images acquired at multiple spatial resolutions and with possibly different modalities at the considered acquisition times to generate an output change map at the finest observed resolution. This is accomplished by modeling the statistics of the data at the various spatial scales through appropriate generalized Gaussian distributions and by iteratively estimating a set of virtual images that are defined on the pixel grid at the finest resolution and would be collected if all the sensors could work at that resolution. A Markov random field framework is adopted to address the detection problem by defining an appropriate multimodal energy function that is minimized using graph cuts.
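The generalized Gaussian distribution used above to model the statistics at each spatial scale has a standard closed-form density. As a quick reference (the function name is illustrative, not from the paper):

```python
import math

def ggd_pdf(x, mu, alpha, beta):
    """Generalized Gaussian density with location mu, scale alpha > 0 and
    shape beta > 0. The shape parameter controls the tail behaviour:
    beta = 2 recovers the Gaussian, beta = 1 the Laplacian, and beta < 1
    gives the heavier tails often observed in SAR amplitude statistics."""
    coeff = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return coeff * math.exp(-((abs(x - mu) / alpha) ** beta))
```

Fitting `alpha` and `beta` per class and per scale (e.g., by maximum likelihood, as the abstract mentions) is what lets one family of densities cover the differently behaved sensors in the multimodal stack.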

    Planetary Crater Detection and Registration Using Marked Point Processes, Multiple Birth and Death Algorithms, and Region-Based Analysis

    Because of the large variety of sensors and spacecraft collecting data, planetary science needs to integrate various multi-sensor and multi-temporal images. These multiple data represent a precious asset, as they allow the study of targets' spectral responses and of changes in the surface structure; because of their variety, they also require accurate and robust registration. A new crater detection algorithm, used to extract features that will be integrated into an image registration framework, is presented. A marked point process-based method has been developed to model the spatial distribution of elliptical objects (i.e., the craters), and a birth-death Markov chain Monte Carlo method, coupled with a region-based scheme aiming at computational efficiency, is used to find the optimal configuration fitting the image. The extracted features are exploited, together with a newly defined fitness function based on a modified Hausdorff distance, by an image registration algorithm whose architecture has been designed to minimize the computational time.
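The fitness function above builds on a modified Hausdorff distance between the detected crater features of the two images. Assuming the Dubuisson-Jain variant, which replaces the classical max-min with a mean nearest-neighbour distance and is therefore less sensitive to a single outlier detection, a minimal sketch over 2-D point sets is:

```python
def modified_hausdorff(set_a, set_b):
    """Modified Hausdorff distance between two point sets: the larger of
    the two directed mean nearest-neighbour distances. More robust to
    outliers than the classical (max-min) Hausdorff distance."""
    def mean_nn(src, dst):
        total = 0.0
        for (x, y) in src:
            # distance from each source point to its nearest destination point
            total += min(((x - u) ** 2 + (y - v) ** 2) ** 0.5 for (u, v) in dst)
        return total / len(src)
    return max(mean_nn(set_a, set_b), mean_nn(set_b, set_a))
```

In a registration loop, the candidate transformation is applied to one crater set and this distance is minimized; how the paper combines it into its fitness function is not detailed in the abstract.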

    Registration of Multisensor Images through a Conditional Generative Adversarial Network and a Correlation-Type Similarity Measure

    The automatic registration of multisensor remote sensing images is a highly challenging task due to the inherently different physical, statistical, and textural characteristics of the input data. Information-theoretic measures are often used because they favor the comparison of local intensity distributions in the images. In this paper, a novel method based on the combination of a deep learning architecture and a correlation-type area-based functional is proposed for the registration of a multisensor pair of images, including an optical image and a synthetic aperture radar (SAR) image. The method makes use of a conditional generative adversarial network (cGAN) in order to address image-to-image translation across the optical and SAR data sources. Then, once the optical and SAR data are brought to a common domain, an area-based ℓ2 similarity measure is used together with the COBYLA constrained maximization algorithm for registration purposes. While correlation-type functionals are usually ineffective in the application to multisensor registration, exploiting the image-to-image translation capabilities of cGAN architectures allows moving the complexity of the comparison to the domain adaptation step, thus enabling the use of a simple ℓ2 similarity measure, favoring high computational efficiency, and opening the possibility to process a large amount of data at runtime. Experiments with multispectral and panchromatic optical data combined with SAR images suggest the effectiveness of this strategy and the capability of the proposed method to achieve more accurate registration as compared to state-of-the-art approaches.
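Once the cGAN has translated the optical image into the SAR domain, the ℓ2 measure described above reduces to a sum of squared differences. The paper optimizes a continuous transformation with COBYLA; the sketch below replaces that with an exhaustive search over integer shifts purely for illustration (function names and the search range are assumptions, not the paper's implementation):

```python
import numpy as np

def l2_similarity(a, b):
    """Negative sum of squared differences between two same-domain images
    (higher is better); valid only after domain adaptation has removed the
    multisensor intensity discrepancy."""
    return -float(np.sum((a.astype(float) - b.astype(float)) ** 2))

def best_shift(fixed, moving, max_shift=3):
    """Illustrative stand-in for the transform optimization: exhaustively
    search integer (dy, dx) shifts maximizing the l2 similarity. The paper
    instead optimizes a continuous transform with COBYLA."""
    best, best_shift_found = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = l2_similarity(fixed, shifted)
            if score > best:
                best, best_shift_found = score, (dy, dx)
    return best_shift_found
```

Because the per-candidate cost is a single subtraction and sum rather than a joint-histogram estimate, this is the computational advantage the abstract attributes to moving the multisensor complexity into the cGAN.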

    Supervised classification of thermal infrared hyperspectral images through Bayesian, Markovian, and region-based approaches

    Hyperspectral images in the thermal infrared range are attracting increasing attention in the remote sensing field. Nonetheless, the generation of land cover maps using this innovative kind of remote sensing data has been scarcely studied so far. The aim of this article is to experimentally investigate the potential of various supervised classification approaches to land cover mapping from high spatial resolution thermal hyperspectral images. The considered methods include both non-contextual and spatial-contextual classifiers, and encompass methodological approaches based on Bayesian decision theory, Markov random fields, multiscale region-based analysis, and Bayesian feature reduction. Experiments were conducted with a challenging data set associated with a complex urban and vegetated scene. Overall, accurate results were achieved using the contextual approaches. The validation suggested the effectiveness of pattern recognition tools in the application to this innovative type of remote sensing data, while also indicating potential improvements through the fusion with physically-based methods.
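The non-contextual Bayesian baseline referred to above is typically a maximum-likelihood classifier with Gaussian class-conditional densities. As an illustrative sketch (equal priors assumed; the function name and interface are not from the article):

```python
import numpy as np

def gaussian_ml_classify(pixels, class_means, class_covs):
    """Non-contextual Bayesian classification: each pixel (a spectral
    vector, one row of `pixels`) is assigned to the class maximizing a
    multivariate Gaussian log-likelihood, assuming equal priors."""
    scores = []
    for mu, cov in zip(class_means, class_covs):
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = pixels - mu
        # squared Mahalanobis distance of every pixel to the class mean
        maha = np.einsum("ij,jk,ik->i", d, inv, d)
        scores.append(-0.5 * (maha + logdet))
    return np.argmax(np.stack(scores, axis=1), axis=1)
```

The spatial-contextual classifiers in the article then regularize these per-pixel decisions, e.g., through Markov random field priors over neighbouring labels.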

    Compounding Approaches for Wind Prediction From Underwater Noise by Supervised Learning

    Wind speed and its evolution over time and space are important for many purposes, particularly for climate studies. However, direct measurement at the ocean surface is problematic, and remote prediction using satellite and radar systems rarely offers the desired resolution or accuracy, especially in polar waters. Being able to predict wind speed from underwater acoustic noise is therefore key to improving climate models and monitoring systems. To date, empirical equations using the noise spectrum at a given frequency, fitted to the available data, have been adopted. Supervised machine-learning regression, which can leverage the whole noise spectrum, has only recently been proposed. This article exploits the concurrent acquisition of underwater noise and anemometer measurements over a period of 16 months, at 10-min steps, to demonstrate the superiority of regression models based on supervised learning over empirical equations. It is also shown that, depending on the type of compounding implemented, different tradeoffs are achieved between accuracy and temporal resolution. Considering all samples in the data set, including those in which rainfall and passing ships occurred, and separating by cross-validation the samples used for training from those used for testing, it is shown that a regressor based on the random forest technique, followed by the compounding of the predictions over an interval of 1 h, provides a mean absolute error of 0.62 m/s and a correlation coefficient of 0.96. In addition, the robustness with respect to the parameter setting and the performance obtained when predicting in chronological order are evaluated.
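The compounding step described above trades temporal resolution for accuracy by aggregating per-sample predictions over 1-hour intervals. Assuming the simplest variant, a plain average of the six 10-min predictions in each non-overlapping hour (the article's exact compounding schemes may differ), a minimal sketch is:

```python
def compound_predictions(preds, window=6):
    """Compound per-sample wind-speed predictions (one per 10-min step)
    into 1-hour estimates by averaging non-overlapping blocks of `window`
    consecutive predictions, trading temporal resolution for accuracy."""
    return [sum(preds[i:i + window]) / window
            for i in range(0, len(preds) - window + 1, window)]
```

Averaging suppresses the zero-mean part of the per-sample regression error, which is consistent with the reported drop in mean absolute error after 1-hour compounding.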

    EXPERIMENTAL COMPARISON OF REGISTRATION METHODS FOR MULTISENSOR SAR-OPTICAL DATA

    Synthetic aperture radar (SAR) and optical satellite image registration is a field that has developed over the last few decades and given rise to a great number of approaches. The registration process is composed of several steps: feature definition, feature comparison, and optimization of a geometric transformation between the images. Feature definition can be done using simple traditional filtering or more complex deep learning (DL) methods. In this paper, two traditional approaches and a DL approach are compared. One may then wonder whether the complexity of DL is worthwhile for the registration task. The aim of this paper is to quantitatively compare approaches rooted in distinct methodological areas on two common datasets with different resolutions. The comparison suggests that, although more complex, the DL approach is more precise than the traditional methods.